scenario-notebooks/Guided Hunting - Use Machine Learning to Detect Potential Low and Slow Password Sprays using Apache Spark via Azure Synapse.ipynb

{ "cells": [ { "cell_type": "markdown", "metadata": { "nteract": { "transient": { "deleting": false } } }, "source": [ "# Guided Hunting - Use Machine Learning to Detect Potential Low and Slow Password Sprays using Apache Spark via Azure Synapse\n", "\n", "__Notebook Version:__ 1.0<br>\n", "__Python Version:__ Python 3.8<br>\n", "__Required Packages:__ azureml-synapse, msticpy, azure-storage-file-datalake <br>\n", "__Platforms Supported:__ Azure Machine Learning Notebooks connected to Azure Synapse Workspace\n", " \n", "__Data Source Required:__ Yes\n", "\n", "__Data Source:__ SigninLogs\n", "\n", "__Spark Version:__ 3.1 or above\n", " \n", "## Description\n", "This guided hunting notebook leverages machine learning to tackle the difficult problem of detecting low and slow password spray campaigns. (This augments the broader-scoped password spray detection already provided via Microsoft’s Identity Protection Integration for Sentinel.)\n", "We leverage the built-in parallelism of PySpark and MLlib (via the Azure Synapse linked service) to ingest, query and analyse data at scale.\n", "\n", "Low and slow sprays are a variant on traditional password spray attacks that are being increasingly used by sophisticated adversaries.\n", "These adversaries can randomize client fields between each sign-in attempt, including IP addresses, user agents and client applications, and are often willing to let the password spray campaigns run at a very low frequency over a period of months or years, making detection very challenging. \n", "A key observation that we exploit in this notebook is the fact that, within a single campaign, attackers often randomize the same large number of properties simultaneously, resulting in a group of logins occurring periodically over a long period of time with the same set of anomalous properties.\n", "\n", "This notebook runs through the following ML-driven approach to surfacing potential low and slow sprays. (For more details on the approach see the accompanying Microsoft Tech Community blog post: [Microsoft Sentinel Blog - Microsoft Tech Community](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/bg-p/MicrosoftSentinelBlog).)\n", "\n", "1.\t**Detect anomalous fields for each failed sign-in** attempt using successful sign-ins as a baseline\n", "2.\t**Use ML to cluster failed sign-ins** by the columns which were randomized/anomalous\n", "3.\t**Prune the clusters** from the previous step based on knowledge of what a low and slow spray looks like; for example, by removing clusters in which sign-ins do not occur at a steady frequency over an extended period of time\n", "4.\t**Further analyze the candidate password spray clusters** (using threat intelligence enrichments from msticpy, for example), to find any invariant properties within the clusters\n", "5.\t**Identify any successful sign-ins that follow the patterns** observed for each cluster from the previous step and create Sentinel incidents as appropriate\n", "\n", "**Related MITRE ATT&CK techniques:**\n", "- [T1110: Brute Force](https://attack.mitre.org/techniques/T1110/)\n", " - [T1110.003: Password Spraying](https://attack.mitre.org/techniques/T1110/003/)\n", " - [T1110.004: Credential Stuffing](https://attack.mitre.org/techniques/T1110/004/)\n", "- [T1078: Valid Accounts](https://attack.mitre.org/techniques/T1078/)\n", " - [T1078.004: Cloud Accounts](https://attack.mitre.org/techniques/T1078/004/)\n", " \n", "## Pre-Requisites\n", "\n", "1. 
This notebook makes use of the Azure Synapse integration for Sentinel notebooks. To set up the Synapse integration, please use the notebook [Configurate Azure ML and Azure Synapse Analytics](https://github.com/Azure/Azure-Sentinel-Notebooks/blob/master/Configurate%20Azure%20ML%20and%20Azure%20Synapse%20Analytics.ipynb).\n", "2. Ensure that the `bayespy ~= 0.5.22` Python package is installed on your Spark pool. You can do this by uploading a `requirements.txt` file as detailed in the [docs](https://docs.microsoft.com/azure/synapse-analytics/spark/apache-spark-manage-python-packages#pool-libraries).\n", "3. Ensure that Sentinel SigninLogs data has been exported to an appropriate ADLS storage container. To export the necessary data:\n", " - Set up a continuous log export rule\n", " - Do a one-time export of historical data$^*$ \n", "\n", "$^*$: A walkthrough of the one-time export of historical log data is available in a TechCommunity blog post here: [Export Historical Data from Log Analytics (microsoft.com)](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/export-historical-log-data-from-microsoft-sentinel/ba-p/3413418).\n", "<br>\n", "The template notebook is available via the Sentinel UI or on GitHub: [Export Historical Log Data (GitHub)](https://github.com/Azure/Azure-Sentinel-Notebooks/blob/master/scenario-notebooks/Export%20Historical%20Log%20Data.ipynb).\n", "\n", "**Python modules may need to be downloaded.** \n", "**Please run the cells sequentially to avoid errors. Please do not use \"run all cells\".**\n", "\n", "## Table of Contents\n", "1. Warm-up\n", "2. Authentication to Azure Resources\n", "3. Configure Azure ML and Azure Synapse Analytics\n", "4. Load the Data\n", "5. Data Cleansing using Spark\n", "6. Data Science using Spark\n", "7. Enriching the Results\n", "8. Conclusion\n", "\n" ] }, { "cell_type": "markdown", "metadata": { "nteract": { "transient": { "deleting": false } } }, "source": [ "# 1. Setup" ] }, { "cell_type": "markdown", "metadata": { "nteract": { "transient": { "deleting": false } } }, "source": [ "## Install Packages\n", "> **Note**: Install the packages below only the first time you run this notebook, and restart the kernel once done."
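, "\n", "The pip installs below set up the AML notebook environment. The Spark pool is configured separately: as noted in pre-requisite 2, upload a `requirements.txt` to the pool containing the `bayespy` pin. An illustrative sketch of that file, based only on the version pinned in the pre-requisites, would be:\n", "```text\n", "bayespy~=0.5.22\n", "```\n"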
] }, { "cell_type": "code", "execution_count": null, "metadata": { "gather": { "logged": 1657890880080 } }, "outputs": [], "source": [ "# Install AzureML Synapse package to use spark magics\n", "import sys\n", "!{sys.executable} -m pip install --upgrade azureml-synapse" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "gather": { "logged": 1657718929323 }, "jupyter": { "outputs_hidden": false, "source_hidden": false }, "nteract": { "transient": { "deleting": false } } }, "outputs": [], "source": [ "# Install Azure storage datalake library to manipulate file systems\n", "import sys\n", "!{sys.executable} -m pip install --upgrade azure-storage-file-datalake --pre" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "gather": { "logged": 1657719343783 }, "jupyter": { "outputs_hidden": false, "source_hidden": false }, "nteract": { "transient": { "deleting": false } } }, "outputs": [], "source": [ "# Install msticpy for enhanced security data analysis\n", "import sys\n", "!{sys.executable} -m pip install --upgrade msticpy[azure]" ] }, { "cell_type": "markdown", "metadata": { "nteract": { "transient": { "deleting": false } } }, "source": [ "*** $\\color{red}{Note:~After~installing~the~packages,~please~restart~the~kernel.}$ ***\n", "\n", "## Initialize `msticpy`\n", "\n", "The `nbinit` module loads required libraries and optionally installs required packages." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "gather": { "logged": 1658243187835 }, "jupyter": { "outputs_hidden": false, "source_hidden": false }, "nteract": { "transient": { "deleting": false } } }, "outputs": [], "source": [ "# Load Python libraries that will be used in the non-Synapse portion of this notebook\n", "from datetime import timedelta, datetime\n", "import importlib\n", "import json\n", "\n", "from azureml.core import Workspace, LinkedService, SynapseWorkspaceLinkedServiceConfiguration, Datastore\n", "\n", "from azure.storage.filedatalake import DataLakeServiceClient\n", "from azure.core._match_conditions import MatchConditions\n", "from azureml.core.compute import ComputeTarget, SynapseCompute\n", "from azure.storage.filedatalake._models import ContentSettings\n", "from msticpy.nbtools import nbinit\n", "\n", "import pandas as pd\n", "from IPython import get_ipython\n", "from IPython.display import display, HTML\n", "from ipywidgets import Dropdown, HBox, IntSlider, Label, Layout\n", "from scipy.stats import ks_1samp, ks_2samp\n", "import warnings\n", "warnings.simplefilter(action='ignore', category=FutureWarning)\n", "\n", "\n", "REQ_PYTHON_VER = \"3.10\"\n", "REQ_MSTICPY_VER = \"2.12.0\"\n", "\n", "display(HTML(\"<h3>Starting Notebook setup...</h3>\"))\n", "nbinit.init_notebook(namespace=globals())\n", "\n", "\n", "WIDGET_DEFAULTS = {\n", " \"layout\": Layout(width=\"95%\"),\n", " \"style\": {\"description_width\": \"initial\"},\n", "}\n", "\n", "#Set pandas options\n", "pd.set_option('display.max_rows', 10)\n", "pd.set_option('max_colwidth', 50)" ] }, { "cell_type": "markdown", "metadata": { "nteract": { "transient": { "deleting": false } } }, "source": [ "## Configure Azure ML and Azure Synapse Analytics\n", "\n", "If you haven't previously set up the Synapse linked service for AzureML, please use the notebook, [Configurate Azure ML and Azure Synapse Analytics](https://github.com/Azure/Azure-Sentinel-Notebooks/blob/master/Configurate%20Azure%20ML%20and%20Azure%20Synapse%20Analytics.ipynb), to do so. 
The notebook configures an existing Azure Synapse workspace to create and connect to a Spark pool. You can then create a linked service and connect the AML workspace to the Azure Synapse workspace.<br>\n", "\n", "You will also need to ensure that the `bayespy ~= 0.5.22` Python package is installed on your Spark pool. You can do this by uploading a `requirements.txt` file as detailed in the [docs](https://docs.microsoft.com/azure/synapse-analytics/spark/apache-spark-manage-python-packages#pool-libraries)." ] }, { "cell_type": "markdown", "metadata": { "nteract": { "transient": { "deleting": false } } }, "source": [ "### Authentication to Azure Resources\n", "\n", "We now connect the AML workspace to the Azure Synapse workspace using the linked service.\n", "\n", "> **Note**: Specify the input parameters in the step below in order to connect to the Spark attached compute." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "gather": { "logged": 1658243188299 }, "jupyter": { "outputs_hidden": false, "source_hidden": false }, "nteract": { "transient": { "deleting": false } } }, "outputs": [], "source": [ "amlworkspace = '<aml workspace name>' # fill in your AML workspace name\n", "subscription_id = '<subscription id>' # fill in your subscription id\n", "resource_group = '<resource group of AML workspace>' # fill in your resource group for the AML workspace\n", "linkedservice = '<linked service name>' # fill in your linked service created to connect to Synapse workspace" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "gather": { "logged": 1658243188890 }, "jupyter": { "outputs_hidden": false, "source_hidden": false }, "nteract": { "transient": { "deleting": false } } }, "outputs": [], "source": [ "# Get the aml workspace\n", "aml_workspace = Workspace.get(name=amlworkspace, subscription_id=subscription_id, resource_group=resource_group)\n", "\n", "# Retrieve a known linked service\n", "linked_service = LinkedService.get(aml_workspace, linkedservice)" ] }, { "cell_type": "markdown", "metadata": { "nteract": { "transient": { "deleting": false } } }, "source": [ "### Start Spark Session\n", "Enter your Synapse Spark compute below. To view details of available Spark computes in the AML UI, please follow these steps: <br>\n", "1. On the AML Studio left menu, navigate to **Linked Services** <br>\n", "2. Click on the name of the Linked Service you want to use <br>\n", "3. Select the **Spark pools** tab <br>\n", "\n", "> **Note:** The Python contexts for the AML notebooks session and the Spark session are separate - this means that all Python variables defined using the `%%synapse` cell magic are not available in the AML notebook session and vice-versa." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "gather": { "logged": 1658243190338 }, "jupyter": { "outputs_hidden": false, "source_hidden": false }, "nteract": { "transient": { "deleting": false } } }, "outputs": [], "source": [ "available_spark_compute_targets = [compute.name for compute in ComputeTarget.list(aml_workspace) if compute.type == 'SynapseSpark']\n", "synapse_spark_compute_dd = Dropdown(options=available_spark_compute_targets)\n", "HBox([Label('Choose Synapse Spark compute:'), synapse_spark_compute_dd])" ] }, { "cell_type": "markdown", "metadata": { "nteract": { "transient": { "deleting": false } } }, "source": [ "In order to work with months or years of data in an efficient, scalable way, we make use of Spark's native multi-executor parallelism. \n", "The code in this notebook will scale to any number of nodes, though the optimal performance-vs-cost balance will depend on the volume of your data - 10 executors may be a reasonable starting point. (See [pricing details](https://azure.microsoft.com/pricing/details/synapse-analytics/).)\n", "\n", "> **Note:** Make sure you have selected your Synapse Spark compute from the drop-down in the previous cell _before_ running the cell below" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "gather": { "logged": 1658243196909 }, "jupyter": { "outputs_hidden": false, "source_hidden": false }, "nteract": { "transient": { "deleting": false } } }, "outputs": [], "source": [ "synapse_spark_compute = synapse_spark_compute_dd.value\n", "compute = SynapseCompute(aml_workspace, synapse_spark_compute)\n", "num_executors_slider = IntSlider(\n", " value=compute.min_node_count,\n", " min=compute.min_node_count,\n", " max=compute.max_node_count - 2,\n", " step=1,\n", ")\n", "HBox([Label('Maximum number of executors for Spark session:'), num_executors_slider])" ] }, { "cell_type": "markdown", "metadata": { "nteract": { "transient": { "deleting": false } } }, "source": [ "Now we start the Spark session with the configuration options selected above.\n", "\n", "> **Note:** You can also use the Synapse line/cell magic to start a session if you do not need to expand variables in your spark configuration - e.g. \n", "`%synapse start -s $subscription_id -w $amlworkspace -r $resource_group -c $synapse_spark_compute` \n", "More details are here: [RemoteSynapseMagics class - Azure Machine Learning Python | Microsoft Docs](https://docs.microsoft.com/python/api/azureml-synapse/azureml.synapse.magics.remotesynapsemagics(class)?view=azure-ml-py#azureml-synapse-magics-remotesynapsemagics-synapse)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "gather": { "logged": 1658243447221 }, "jupyter": { "outputs_hidden": false, "source_hidden": false }, "nteract": { "transient": { "deleting": false } } }, "outputs": [], "source": [ "# Start Spark session\n", "spark_config = {\n", " \"spark.dynamicAllocation.enabled\": \"true\",\n", " \"spark.dynamicAllocation.maxExecutors\": num_executors_slider.value,\n", " \"spark.dynamicAllocation.minExecutors\": compute.min_node_count,\n", " \"spark.shuffle.service.enabled\": \"true\",\n", "}\n", "spark_config_json = json.dumps(spark_config)\n", "get_ipython().run_cell_magic(\n", " 'synapse',\n", " 'start -s $subscription_id -w $amlworkspace -r $resource_group -c $synapse_spark_compute',\n", " spark_config_json\n", ")" ] }, { "cell_type": "markdown", "metadata": { "nteract": { "transient": { "deleting": false } } }, "source": [ "# 2. Run ML on Azure Synapse Spark\n", "\n", "## Overview of ML Approach\n", "\n", "Our novel ML approach begins with the observation that attackers often randomize the same large number of properties simultaneously, resulting in a group of logins occurring periodically over a long period of time with the same set of anomalous properties. \n", "Thus, we can attempt to cluster failed sign-ins (most password spray sign-in attempts will fail!) based on the set of properties that are anomalous.<br>\n", "\n", "We use a naive Bayes approach to estimate the likelihood of any given property value occurring for a legitimate sign-in and then use outlier detection to highlight unlikely values as being \"anomalous\". 
This gives a dataset in which the rows comprise a (failed) sign-in ID and boolean flags for each sign-in property denoting whether or not that property took an anomalous value. \n", "We model this scenario as a multivariate Bernoulli mixture model, and perform [variational Bayesian inference](https://en.wikipedia.org/wiki/Variational_Bayesian_methods) to detect the presence of latent classes which will be our candidates for low and slow password spray campaigns. \n", "Later, we filter these candidate low and slow clusters by computing various statistics (such as the uniformity of the time-distribution of the sign-ins) and comparing these against what we would expect from a low and slow password spray.\n", "\n", "For more details, see the accompanying Microsoft Tech Community blog post: [Microsoft Sentinel Blog - Microsoft Tech Community](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/bg-p/MicrosoftSentinelBlog).)\n", "\n", "The overall approach looks like this:\n", "\n", "1.\t**Detect anomalous fields for each failed sign-in** attempt using successful sign-ins as a baseline\n", "2.\t**Cluster failed sign-ins** by the columns which were randomized/anomalous\n", "3.\t**Prune the clusters** from the previous step based on knowledge of what a low and slow spray looks like; for example, by removing clusters in which sign-ins do not occur at a steady frequency over an extended period of time\n", "4.\t**Further analyze the candidate password spray clusters** (using threat intelligence enrichments from msticpy, for example), to find any invariant properties within the clusters\n", "5.\t**Identify any successful sign-ins that follow the patterns** observed for each cluster from the previous step and create Sentinel incidents as appropriate" ] }, { "cell_type": "markdown", "metadata": { "nteract": { "transient": { "deleting": false } } }, "source": [ "Having started the Spark session, we can run PySpark code by starting a cell with the `%%synapse` line magic. \n", "Spark and MLlib are written with efficient parallelisation in mind, meaning that data ETL, analysis and ML is hugely distributed by default, allowing for highly scalable workloads.\n", "\n", "**SPARK and MLlib References:** \n", "\n", "- [User Guide — PySpark 3.2.1 documentation (apache.org)](https://spark.apache.org/docs/latest/api/python/user_guide/index.html)\n", "- [Spark SQL — PySpark 3.2.1 documentation (apache.org)](https://spark.apache.org/docs/latest/api/python/reference/pyspark.sql.html)\n", "- [MLlib (DataFrame-based) — PySpark 3.2.1 documentation (apache.org)](https://spark.apache.org/docs/latest/api/python/reference/pyspark.ml.html)" ] }, { "cell_type": "markdown", "metadata": { "nteract": { "transient": { "deleting": false } } }, "source": [ "We start by importing the packages we will need for the ML into the current session.\n", "\n", "> **Note:** The Python contexts for the AML notebooks session and the Spark session are separate - this means that Python packages imported using the `%%synapse` cell magic are not imported into the AML notebook session and vice-versa." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "outputs_hidden": false, "source_hidden": false }, "nteract": { "transient": { "deleting": false } } }, "outputs": [], "source": [ "%%synapse\n", "\n", "# Import packages used for the Spark portion of this notebook\n", "from datetime import timedelta, datetime, date\n", "from itertools import chain\n", "\n", "from pyspark.ml.feature import IndexToString, OneHotEncoder, StringIndexer, VectorAssembler\n", "from pyspark.ml.stat import Summarizer\n", "from pyspark.sql import functions as F\n", "from pyspark.sql.functions import col, lit, pandas_udf\n", "from pyspark.sql.types import *\n", "from pyspark.sql.window import Window\n", "\n", "# NOTE: As per the notebook pre-requisites, ensure that you have `bayespy` installed on your Spark compute!\n", "from bayespy.inference import VB\n", "from bayespy.nodes import Bernoulli, Beta, Categorical, Dirichlet, Mixture\n", "import bayespy.plot as bpplt\n", "from bayespy.utils import random\n", "import pandas as pd\n", "import numpy as np\n", "from scipy.stats import entropy" ] }, { "cell_type": "markdown", "metadata": { "nteract": { "transient": { "deleting": false } } }, "source": [ "## Load Data\n", "\n", "Fill in the location details for the ADLS container to which the Sentinel SigninLogs are exported. \n", "\n", "We also specify how much data we want to use with the ML algorithm by specifying an end date and a number of lookback days. Keep in mind that low and slow password sprays take place over long periods (typically months or even years).<br>\n", "You will also need to ensure that sufficient historical log data is actually available in ADLS. " ] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "outputs_hidden": false, "source_hidden": false }, "nteract": { "transient": { "deleting": false } } }, "outputs": [], "source": [ "%%synapse\n", "\n", "# Primary storage info\n", "account_name = '<storage account name>' # fill in your primary account name\n", "container_name = '<container name>' # fill in your container name\n", "subscription_id = '<subscription id>' # fill in your subscription id\n", "resource_group = '<resource group>' # fill in your resource group for ADLS\n", "workspace_name = '<Microsoft sentinel/log analytics workspace name>' # fill in your workspace name\n", "\n", "# Datetime and lookback parameters\n", "end_date = \"<enter date in the format yyyy-MM-dd e.g. 2021-09-17 or datetime.today().strftime('%Y-%m-%d')>\" # fill in your end date\n", "lookback_days = 240 # how many days prior to the end date to include; make sure you have historical data available in ADLS" ] }, { "cell_type": "markdown", "metadata": { "nteract": { "transient": { "deleting": false } } }, "source": [ "The information from the above cell is used to determine the ADLS paths for the data we want to load (based on the partition scheme used by the \"continuous data export\" tool in Sentinel)."
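, "\n", "For example, with a hypothetical `end_date` of `2022-06-15`, the helper in the next cell generates one path per day in the lookback window, each ending with a per-day suffix of the form `.../workspaces/<workspace name>/y=2022/m=06/d=15`.\n"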
] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "outputs_hidden": false, "source_hidden": false }, "nteract": { "transient": { "deleting": false } } }, "outputs": [], "source": [ "%%synapse\n", "\n", "# Compiling ADLS paths from date strings\n", "def generate_adls_paths(end_date_str: str, lookback_days: int, adls_path: str):\n", " end_date = datetime.strptime(end_date_str, '%Y-%m-%d')\n", " days = [end_date - timedelta(days=i) for i in range(lookback_days + 1)]\n", "\n", " pathlist = []\n", " for day in days:\n", " date_str = day.strftime('%Y-%m-%d').split('-')\n", " day_path = adls_path + f'/y={date_str[0]}/m={date_str[1]}/d={date_str[2]}'\n", " pathlist.append(day_path)\n", "\n", " return pathlist\n", "\n", "# This is the root directory to which data from the Sentinel continuous data export tool is written\n", "adls_path = (\n", " f'abfss://{container_name}@{account_name}.dfs.core.windows.net/WorkspaceResourceId=/'\n", " f'subscriptions/{subscription_id}/'\n", " f'resourcegroups/{resource_group.lower()}/'\n", " f'providers/microsoft.operationalinsights/'\n", " f'workspaces/{workspace_name.lower()}'\n", ")\n", "# This gives a list of ADLS paths from which we want Spark to read (recursively)\n", "per_day_log_paths = generate_adls_paths(end_date, lookback_days, adls_path)" ] }, { "cell_type": "markdown", "metadata": { "nteract": { "transient": { "deleting": false } } }, "source": [ "Now we can read the data into a Spark dataframe. It is worth noting that, since the exported log data comprises a separate data file for each 5-minute partition, we may be reading from over 100,000 files. \n", "Therefore, you may wish to increase the maximum number of executors available to the Azure Synapse Spark session - this will allow this operation to be massively parallelized automatically, dramatically reducing time taken.\n", "\n", "### Feature Selection\n", "\n", "Here, we also specify the columns that we want to read into the Spark dataframe. The list suggested below comprises some core sign in properties - `Id`, `UserPrincipalName`, `ResultType`, `TimeGenerated` - and some additional properties (which we refer to as \"features\" for the ML). \n", "The features below have been selected to help spot behaviors that make password sprays stand out, e.g.\n", "\n", "- Features (properties) that an attacker is able to randomise (e.g. IP addresses, location details, user agent-derived fields)\n", "- Features (properties) where the \"normal\" values are concealed from attackers (so are hard for an attacker to guess) (e.g. 
operating system (included in DeviceDetail), browser, city)\n", "\n", "_(Some features fall into both categories)_" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "outputs_hidden": false, "source_hidden": false }, "nteract": { "transient": { "deleting": false } } }, "outputs": [], "source": [ "%%synapse\n", "\n", "# Specify the columns to select\n", "column_list = [\n", " \"Id\",\n", " \"UserPrincipalName\",\n", " \"ResultType\",\n", " \"TimeGenerated\",\n", " \"AppDisplayName\",\n", " \"ClientAppUsed\",\n", " \"DeviceDetail\",\n", " \"IPAddress\",\n", " \"Location\",\n", " \"LocationDetails\",\n", " \"IsInteractive\",\n", "]\n", "\n", "# Read the data from ADLS into a Spark dataframe, selecting the columns specified above\n", "try:\n", " df = spark.read.json(per_day_log_paths, recursiveFileLookup=True)\n", "\n", " # AutonomousSystemNumber is a new field and may not yet be available in all logs\n", " # if available, this is preferred for use in analysis over IPAddress\n", " if \"AutonomousSystemNumber\" in df.columns:\n", " column_list += [\"AutonomousSystemNumber\"]\n", "\n", " df = df.select(*column_list)\n", "\n", " #Display the count of records\n", " print(f\"\\n No. of records loaded from the past {lookback_days} days: {df.count()}\")\n", "\n", "except Exception as e:\n", " # If you see \"path doesn't exist\" errors, it may be because the lookback_days parameter is set further back than the amount of historical data available\n", " print(f\"Could not load data due to error:\\n\\t {e}\")" ] }, { "cell_type": "markdown", "metadata": { "nteract": { "transient": { "deleting": false } } }, "source": [ "## Data Wrangling using Spark" ] }, { "cell_type": "markdown", "metadata": { "nteract": { "transient": { "deleting": false } } }, "source": [ "### Filtering data\n", "\n", "We start by filtering the data set by result types, keeping result types: \n", "- 0 (successful sign in)\n", "- 50055 (expired password)\n", "- 50126 (incorrect username or password)\n", "\n", "The latter two failure errors are the ones most commonly observed as part of password sprays. \n", "\n", "_See [Azure AD Authentication and authorization error codes](https://docs.microsoft.com/azure/active-directory/develop/reference-aadsts-error-codes) for more details._" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "outputs_hidden": false, "source_hidden": false }, "nteract": { "transient": { "deleting": false } } }, "outputs": [], "source": [ "%%synapse\n", "\n", "# We focus on failed login types 50055 and 50126 as these are the ones most commonly observed in password sprays\n", "df = df.filter(col(\"ResultType\").isin([\"0\", \"50055\", \"50126\"]))\n", "print(f\"Row count after filtering: {df.count()}\")" ] }, { "cell_type": "markdown", "metadata": { "nteract": { "transient": { "deleting": false } } }, "source": [ "### Deduplication \n", "\n", "Exported logs may occasionally contain a small amount of duplication either due to the way in which they are collected or due to the data export process (see [data completeness for exported logs](https://docs.microsoft.com/azure/active-directory/develop/reference-aadsts-error-codes)). 
\n", "In general, duplicate rows should be removed prior to analysis, but in some cases, you may decide to postpone or omit de-duplication if duplicated rows are unlikely to impact your detection logic (especially as de-duping can be a very expensive operation depending on the size of your dataframe and the number of columns that comprise a unique key)." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "outputs_hidden": false, "source_hidden": false }, "nteract": { "transient": { "deleting": false } } }, "outputs": [], "source": [ "%%synapse\n", "\n", "print(f'# Rows before de-duplication: {df.count()}')\n", "df = df.dropDuplicates(subset=['Id'])\n", "print(f'# Rows after de-duplication: {df.count()}')" ] }, { "cell_type": "markdown", "metadata": { "nteract": { "transient": { "deleting": false } } }, "source": [ "### Data Parsing and Extraction\n", "\n", "In this step, we:\n", "- Create a new column containing the IP prefix (if IP ASN is available, prefer to use this instead)\n", "- Extract the \"browser\", \"displayName\" and \"operatingSystem\" fields from the \"DeviceDetail\" JSON column\n", "- Extract the \"city\", \"state\", \"longitude\" and \"latitude\" fields from the \"LocationDetails\" JSON column" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "outputs_hidden": false, "source_hidden": false }, "nteract": { "transient": { "deleting": false } } }, "outputs": [], "source": [ "%%synapse\n", "\n", "@pandas_udf(StringType())\n", "def ip_prefix(s: pd.Series) -> pd.Series:\n", " \"\"\"Given a series of IP address strings, return the first three octets of each IP as a series of strings\"\"\"\n", " return s.apply(lambda ip: '.'.join(ip.split('.')[:3]))\n", "\n", "# Create a new calculated column, \"IPPrefix\", with the first three octets of the IP address\n", "df = df.withColumn('IPPrefix', ip_prefix(col('IPAddress')))\n", "\n", "# Fields to extract from the \"DeviceDetail\" and \"LocationDetails\" JSON columns\n", "device_detail_cols = ['browser', 'displayName', 'operatingSystem']\n", "location_details_cols = ['city', 'state', 'geoCoordinates']\n", "\n", "# Parse the JSON columns and extract the specified fields as new columns\n", "df = (\n", " df\n", " .select('*', F.json_tuple(col('DeviceDetail'), *device_detail_cols).alias(*device_detail_cols))\n", " .select('*', F.json_tuple(col('LocationDetails'), *location_details_cols).alias(*location_details_cols))\n", " .select('*', F.json_tuple(col('geoCoordinates'), 'latitude', 'longitude').alias('latitude', 'longitude'))\n", " .drop('DeviceDetail', 'LocationDetails', 'geoCoordinates')\n", ")\n", "df.show()" ] }, { "cell_type": "markdown", "metadata": { "nteract": { "transient": { "deleting": false } } }, "source": [ "## Feature Encoding\n", "\n", "We now [one-hot encode](https://en.wikipedia.org/wiki/One-hot#Machine_learning_and_statistics) our categorical features using Spark's MLlib \n", "(stringing together the [`StringIndexer`](https://spark.apache.org/docs/3.1.1/api/python/reference/api/pyspark.ml.feature.StringIndexer.html#pyspark.ml.feature.StringIndexer) transform followed by the [`OneHotEncoder`](https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.ml.feature.OneHotEncoder.html) transform).\n" ] }, { "cell_type": "markdown", "metadata": { "nteract": { "transient": { "deleting": false } } }, "source": [ "First we use the 
[`StringIndexer`](https://spark.apache.org/docs/3.1.1/api/python/reference/api/pyspark.ml.feature.StringIndexer.html#pyspark.ml.feature.StringIndexer) class to map the categorical feature columns \n", "to columns of category indices. For each column, the indices run from 0 to the number of distinct values observed." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "outputs_hidden": false, "source_hidden": false }, "nteract": { "transient": { "deleting": false } } }, "outputs": [], "source": [ "%%synapse\n", "\n", "# The columns to be ordinal-encoded (replace IPPrefix with AutonomousSystemNumber if it is available in your data!)\n", "feat_cols = [\"AppDisplayName\", \"ClientAppUsed\", \"browser\", \"displayName\", \"operatingSystem\", \"IPPrefix\", \"Location\", \"state\", \"city\"] # We could include sign-in time (bucketed into hours) here as well\n", "# The output column names for the ordinal-encoded data\n", "ord_enc_cols = [\"OrdEnc_\" + col for col in feat_cols]\n", "\n", "# So that missing data is encoded as its own category, we replace empty strings and nulls with \"NODATA\"\n", "df = df.replace(\"\", None, subset=feat_cols).fillna(\"NODATA\", subset=feat_cols)\n", "\n", "# Instantiate, fit, then transform\n", "ordinal_encoder = StringIndexer(inputCols=feat_cols, outputCols=ord_enc_cols, handleInvalid=\"keep\", stringOrderType=\"alphabetAsc\")\n", "ordinal_encoder = ordinal_encoder.fit(df)\n", "encoded_df = ordinal_encoder.transform(df).drop(*feat_cols)\n", "\n", "# Create a list of `IndexToString` objects which can be used to convert category indices back to the original values\n", "ordinal_decoders = [IndexToString(inputCol=enc_col, outputCol=orig_col, labels=labels) for (orig_col, enc_col, labels) in zip(feat_cols, ord_enc_cols, ordinal_encoder.labelsArray)]\n", "\n", "encoded_df.show()" ] }, { "cell_type": "markdown", "metadata": { "nteract": { "transient": { "deleting": false } } }, "source": [ "At this stage, we also split our dataframe into two: one containing successful sign-ins and the other containing failed sign-ins. Doing this here will be helpful later on." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "outputs_hidden": false, "source_hidden": false }, "nteract": { "transient": { "deleting": false } } }, "outputs": [], "source": [ "%%synapse\n", "\n", "success_df = encoded_df.filter(col(\"ResultType\") == \"0\") # Result type 0 indicates a successful login\n", "fail_df = encoded_df.filter(col(\"ResultType\").isin([\"50055\", \"50126\"])) # We focus on failed login types 50055 and 50126 as these are the ones most commonly observed in password sprays\n", "print(\"# Successful logins:\", n_successful:=success_df.count())\n", "print(\"# Failed logins:\", n_failed:=fail_df.count())" ] }, { "cell_type": "markdown", "metadata": { "nteract": { "transient": { "deleting": false } } }, "source": [ "Finally, we use the [`OneHotEncoder`](https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.ml.feature.OneHotEncoder.html) class to convert our ordinal-encoded columns of category indices to one-hot binary vectors."
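, "\n", "As a toy illustration of what these two transforms produce (hypothetical values, separate from the notebook's dataframes; assumes the same Spark session):\n", "```python\n", "toy_df = spark.createDataFrame([(\"Linux\",), (\"MacOs\",), (\"Windows 10\",), (\"Windows 10\",)], [\"operatingSystem\"])\n", "toy_indexer = StringIndexer(inputCols=[\"operatingSystem\"], outputCols=[\"OrdEnc_operatingSystem\"], stringOrderType=\"alphabetAsc\").fit(toy_df)\n", "toy_encoder = OneHotEncoder(inputCols=[\"OrdEnc_operatingSystem\"], outputCols=[\"OHE_operatingSystem\"], dropLast=True).fit(toy_indexer.transform(toy_df))\n", "toy_encoder.transform(toy_indexer.transform(toy_df)).show(truncate=False)\n", "# \"Linux\" -> index 0.0 -> sparse vector (2,[0],[1.0]); \"Windows 10\" (the last index) -> (2,[],[]) because dropLast=True\n", "```\n"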
] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "outputs_hidden": false, "source_hidden": false }, "nteract": { "transient": { "deleting": false } } }, "outputs": [], "source": [ "%%synapse\n", "\n", "# The output column names for the one-hot encoded columns\n", "ohe_cols = [\"OHE_\" + col for col in feat_cols]\n", "\n", "# Instantiate, fit, then transform\n", "ohe = OneHotEncoder(inputCols=ord_enc_cols, outputCols=ohe_cols, dropLast=True)\n", "ohe = ohe.fit(success_df)\n", "ohe_df = ohe.transform(success_df).drop(*ord_enc_cols)\n", "\n", "ohe_df.show()" ] }, { "cell_type": "markdown", "metadata": { "nteract": { "transient": { "deleting": false } } }, "source": [ "## Detect Anomalous Fields for Each Failed Sign-In\n", "\n", "The first step is to apply anomaly detection to each column of each failed sign-in - we want to end up with a table that looks like this:\n", "\n", "| Sign-in ID | Is _Country_ anomalous? | Is _City_ anomalous? | Is _OS_ Anomalous? | Is _Browser_ anomalous? | Is _App Display Name_ anomalous? | etc. |\n", "|------------|-------------------------|----------------------|--------------------|-------------------------|----------------------------------|------|\n", "| 1 | **True** | **True** | False | False | False | ... |\n", "| 2 | False | False | False | **True** | **True** | ... |\n", "| 3 | False | **True** | **True** | False | False | ... |\n", "\n", "\n", "We model each of our features as categorical random variables with the categories being the set of unique values observed from all sign-in attempts (both successful and failed). We then use Bayesian parameter estimation with the set of successful sign-ins to learn the true distributions for \"good\" (i.e. non-malicious) sign-in attempts. \n", "(Since we obviously don’t have perfect good vs. malicious labels for all sign-ins, we are using successful sign-ins as a proxy for good sign-ins).\n", "\n", "Mathematically, we model the features as independent categorical variables with symmetric Dirichlet priors with concentration parameter $\\alpha$. This leads us to estimate the probability of feature $i$ taking value $c$ as\n", "<br><br>\n", "$$\n", "\\hat{p}_{i,c} = \\frac{N_{i,c} + \\alpha}{N + \\alpha K_i}\n", "$$\n", "\n", "where $N_{i,c}$ is the number of times that feature $i$ takes the value $c$ in the dataset of successful sign-ins, $N$ is the total number of successful sign-ins, and $K_i$ is the number of available categories of feature $i$ (as observed from both successful and failed sign-ins). <br>\n", "Here, $\\alpha$ acts as a _smoothing parameter_ (see [Additive/Laplace smoothing](https://en.wikipedia.org/wiki/Additive_smoothing)) - increasing $\\alpha$ will cause the algorithm to classify fewer values as being anomalous \n", "(in particular values which haven't been observed in successful sign-ins are less likely to be classed as anomalous).\n", "\n", "### Setting an Anomaly Threshold on Probabilities\n", "\n", "When determining whether values are anomalous, we can't just set a static threshold on the estimated probabilities (i.e. if the likelihood of a value is less than $p$, class it as an anomaly) - what constitutes a good threshold will depend on the distribution of the observed values for that feature. \n", "For example, suppose we observe 20 different cities (derived from GeoIP data), and 95% of successful sign-ins are from city 1, 4% are from city 2, and the remaining 1% are spread across cities 3 - 20. Then, we would probably want to class cities 2 - 20 as anomalous. \n", "Now suppose that we instead observed the following distribution: 24% of successful sign-ins are from city 1 and 4% of successful sign-ins are from each of cities 2 - 20. In this case, we would not want to class cities 2 - 20 as being anomalous \n", "_even though they are below the same threshold as in the first scenario_, since this would mean saying that 76% of all sign-ins had an anomalous sign-in location - this would make for a very noisy approach!\n", "\n", "Instead, we set thresholds dynamically on a per-feature basis by using basic outlier detection - specifically we set a threshold on $\\log(p)$ where this is more than $k$ standard deviations below the mean ($k = 2$ by default, but can be tuned). This is equivalent to standard-scaling the column of log-probabilities before using a static threshold.\n", "(This threshold can also be given an information-theoretic interpretation, since, for example, $-\\mathbb{E}[\\log(p)]$ is just entropy.)\n" ] }, { "cell_type": "markdown", "metadata": { "nteract": { "transient": { "deleting": false } } }, "source": [ "First we run the anomalous feature detection algorithm described above - this produces a dataframe in which the rows comprise a (failed) sign-in ID and boolean flags for each sign-in property denoting whether or not that property took an anomalous value. " ] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "outputs_hidden": false, "source_hidden": false }, "nteract": { "transient": { "deleting": false } } }, "outputs": [], "source": [ "%%synapse\n", "\n", "# Run the anomalous feature detection\n", "\n", "# We could use a library such as scikit-learn's Naive Bayes classifier for this (https://scikit-learn.org/stable/modules/naive_bayes.html)\n", "# but using Spark MLlib directly allows Spark to parallelize this far more efficiently (https://spark.apache.org/mllib/)\n", "\n", "n_feats = len(ohe_cols)\n", "n_categories = np.array([len(labels) for labels in ordinal_encoder.labelsArray])\n", "category_counts = [np.array(vec) for vec in ohe_df.select([Summarizer.numNonZeros(col(c)) for c in ohe_cols]).collect()[0]]\n", "\n", "# We estimate the underlying categorical distribution for each feature (assuming independence of features and using a Dirichlet(alpha) prior)\n", "alpha = 2.0 # smoothing parameter - the concentration parameter from the Dirichlet prior\n", "log_probs = []\n", "for i in range(n_feats):\n", " feat_category_counts = category_counts[i]\n", " n_feat_categories = n_categories[i]\n", " log_probs.append(np.log(feat_category_counts + alpha) - np.log(n_successful + (alpha * n_feat_categories)))\n", "\n", "# We call a value anomalous if it is below a certain probability threshold. However, we cannot just use a static threshold -\n", "# e.g. 
if each \"IPPrefix\" value only appears once for a given user, then each will have a low probability, but we don't want to class all of the IP prefixes as anomalous\n", "sigmage = 2.0\n", "thresholds = []\n", "for feat_category_counts, feat_log_probs in zip(category_counts, log_probs):\n", " mean = np.average(feat_log_probs, weights=feat_category_counts)\n", " variance = np.dot(feat_category_counts, (feat_log_probs - mean) ** 2) / feat_category_counts.sum()\n", " std = np.sqrt(variance)\n", " thresholds.append(mean - (sigmage * std))\n", "\n", "# For each feature, we can determine whether or not the i-th categorical value is anomalous\n", "anomalous_values_masks = [feat_log_prob < threshold for (feat_log_prob, threshold) in zip(log_probs, thresholds)]\n", "\n", "# Since we have already calculated the {value -> is_anomalous_boolean} mapping, we can create a Spark map type to apply this mapping to our dataframe\n", "spark_maps = [F.create_map([F.lit(x) for x in chain(*enumerate(mask.tolist()))]) for mask in anomalous_values_masks]\n", "is_anom_cols = []\n", "for i, enc_col in enumerate(ord_enc_cols):\n", " new_col_name = 'IsAnom_' + feat_cols[i]\n", " is_anom_cols.append(new_col_name)\n", " fail_df = fail_df.withColumn(new_col_name, spark_maps[i][col(enc_col).cast(IntegerType())])\n", "\n", "# We only \"drop\" columns here for the purpose of the output\n", "fail_df.drop(*ord_enc_cols).show(truncate=False)" ] }, { "cell_type": "markdown", "metadata": { "nteract": { "transient": { "deleting": false } } }, "source": [ "Now we collapse our binary \"IsAnom\\_\\*\" columns into a single column of binary vectors representing which features are anomalous for each failed sign-in. (This restructuring of the data will be more convenient for later analysis.)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "outputs_hidden": false, "source_hidden": false }, "nteract": { "transient": { "deleting": false } } }, "outputs": [], "source": [ "%%synapse\n", "\n", "# Collapse the binary \"is_anom_*\" columns into a single column of binary vectors, then convert that column to a 2D numpy array (of shape (n_failed_signins, n_features))\n", "vec_assembler = VectorAssembler(inputCols=is_anom_cols, outputCol=\"anomalous_feats_mask\")\n", "ids, anomalous_feat_masks = np.array(\n", " vec_assembler\n", " .transform(fail_df)\n", " .select(\"Id\", \"anomalous_feats_mask\")\n", " .collect()\n", ").T\n", "\n", "# Convert Spark sparse vectors to numpy arrays\n", "anomalous_feat_masks = np.array([np.array(sparse_vec, dtype=int) for sparse_vec in anomalous_feat_masks])\n", "\n", "# Output a sample of the binary numpy array\n", "print(anomalous_feat_masks[:5])" ] }, { "cell_type": "markdown", "metadata": { "nteract": { "transient": { "deleting": false } } }, "source": [ "## Cluster Failed Sign-Ins\n", "\n", "The core hypothesis for this detection algorithm is that the distribution of anomalous features looks very different depending on how the sign-in was generated - \n", "in particular, sign-ins from a password spray campaign in which attackers use tooling to spoof multiple sign-in properties will have a distinctive \"fingerprint\" of features that are often anomalous together.<br>\n", "For example, suppose that all failed sign-in attempts come from three sources: legitimate user error, password spray campaign 1 and password spray campaign 2. 
For each of these classes, the probability of a given feature being anomalous may look like this:\n", "\n", "\n", "| Source | $\mathbb{P}(\text{Country is anomalous})$ | $\mathbb{P}(\text{City is anomalous})$ | $\mathbb{P}(\text{OS is anomalous})$ | $\mathbb{P}(\text{Browser is anomalous})$ | $\mathbb{P}(\text{App Display Name is anomalous})$ | etc. |\n", "| --------------------- | ----------------------------------------- | -------------------------------------- | ------------------------------------ | ----------------------------------------- | -------------------------------------------------- | ---- |\n", "| Legitimate user error | 0.02 | 0.1 | 0.01 | 0.2 | 0.25 | ... |\n", "| PW Spray 1 | **_0.7_** | **_0.95_** | **_0.6_** | **_0.9_** | **_0.85_** | ... |\n", "| PW Spray 2 | 0.1 | **_0.8_** | 0.1 | **_0.6_** | **_0.8_** | ... |\n", "\n", "\n", "From the hypothetical probabilities in the table, we can see that, for each class of sign-ins, the set of features which are usually anomalous forms a fingerprint for the class.\n", "\n", "Obviously, in practice, the sources of sign-ins are _latent_ variables - i.e. they cannot be observed directly. Instead, we work backwards from our dataset of failed sign-ins and associated anomalous features to try to detect the latent classes and associated probabilities for each feature taking an anomalous value. \n", "From our hypothesis, we hope that, if a password spray campaign is present in our data, it will correspond to one of the detected clusters of failed sign-ins.\n", "\n", "Mathematically, we do this by modelling our dataset as being generated from a [Bernoulli mixture model](https://en.wikipedia.org/wiki/Latent_class_model). We then perform [variational Bayesian inference](https://en.wikipedia.org/wiki/Variational_Bayesian_methods) to try to detect the presence of latent classes." ] }, { "cell_type": "markdown", "metadata": { "nteract": { "transient": { "deleting": false } } }, "source": [ "In the next cell, we use the `bayespy` Python package to set up the Bernoulli mixture model and run variational Bayesian inference - see [Bernoulli mixture model — BayesPy Documentation](https://www.bayespy.org/examples/bmm.html).\n", "\n", "> **Notes:** \n", "> - Set the number of clusters to look for. The true number of groups is unknown to us, so we use an upper bound for the number of clusters we expect to be present (10 is a reasonable number to start with) - the algorithm may assign 0 weight to some clusters if this is too large.\n", "> - This step is not deterministic - rerunning may give slightly different clusterings! If this causes issues, we can simply re-run the variational Bayesian inference multiple times and select the model with the highest ELBO value." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "outputs_hidden": false, "source_hidden": false }, "nteract": { "transient": { "deleting": false } } }, "outputs": [], "source": [ "%%synapse\n", "\n", "# We will create a Bernoulli mixture model. 
The true number of groups is unknown to us, so we use a large enough number of clusters.\n", "\n", "# Here, Z defines the group assignments and P, the anomalous feature probability patterns for each group.\n", "\n", "n_clusters = 10 # 10 is a reasonable number to start with - this is the maximum number of clusters the algorithm will be allowed to find\n", "max_iterations = 5000 # Limit the number of iterations of the variational Bayesian inference algorithm\n", "\n", "# We use the categorical distribution for the group assignments and give the group assignment probabilities an uninformative Dirichlet prior (using a small concentration parameter helps avoid learning spurious classes)\n", "R = Dirichlet(n_clusters * [1e-5], name='R')\n", "Z = Categorical(R, plates=(n_failed, 1), name='Z')\n", "\n", "# Each group has a probability of each feature taking an anomalous value. These probabilities are given beta priors (the beta distribution is the conjugate prior for the Bernoulli distribution)\n", "P = Beta([0.5, 0.5], plates=(n_feats, n_clusters), name='P')\n", "\n", "# This is the overall mixture model created from the components defined above\n", "X = Mixture(Z, Bernoulli, P)\n", "\n", "# This is the variational Bayesian inference class object\n", "Q = VB(Z, R, X, P)\n", "\n", "# Perform the inference\n", "P.initialize_from_random()\n", "X.observe(anomalous_feat_masks)\n", "Q.update(repeat=max_iterations, verbose=True)" ] }
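, { "cell_type": "markdown", "metadata": { "nteract": { "transient": { "deleting": false } } }, "source": [ "If run-to-run variation in the clustering becomes a problem, the re-run-and-keep-the-best-ELBO idea mentioned in the notes above could be sketched as follows. This is illustrative only and is not executed by this notebook; it reuses the model definition from the cell above and assumes that `bayespy`'s `VB` object exposes `compute_lowerbound()` for the evidence lower bound.\n", "```python\n", "best_fit, best_elbo = None, -np.inf\n", "for restart in range(5):\n", "    # Rebuild the model nodes for each restart (bayespy nodes are stateful)\n", "    R = Dirichlet(n_clusters * [1e-5], name='R')\n", "    Z = Categorical(R, plates=(n_failed, 1), name='Z')\n", "    P = Beta([0.5, 0.5], plates=(n_feats, n_clusters), name='P')\n", "    X = Mixture(Z, Bernoulli, P)\n", "    Q_restart = VB(Z, R, X, P)\n", "    P.initialize_from_random()\n", "    X.observe(anomalous_feat_masks)\n", "    Q_restart.update(repeat=max_iterations, verbose=False)\n", "    elbo = Q_restart.compute_lowerbound()  # assumed bayespy API for the ELBO of the fitted approximation\n", "    if elbo > best_elbo:\n", "        best_fit, best_elbo = (Q_restart, R, Z, P, X), elbo\n", "```\n" ] }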
, { "cell_type": "markdown", "metadata": { "nteract": { "transient": { "deleting": false } } }, "source": [ "### Visualize Clusters\n", "\n", "We use [Hinton diagrams](https://scipy-cookbook.readthedocs.io/items/Matplotlib_HintonDiagrams.html) to visually represent the learned clusters. Areas of filled squares represent probabilities (and non-filled squares are used to show uncertainty).\n", "\n", "The first diagram shows the probabilities that a randomly selected failed sign-in will be assigned to that cluster by our model (the areas of the squares are proportional to the cluster assignment probabilities).\n", "\n", "In the second diagram, columns represent clusters and rows represent features, so, for example, a large white square in the 2nd column, 4th row would indicate that failed logins in cluster #2 are likely to have an unusual value for feature #4.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "outputs_hidden": false, "source_hidden": false }, "nteract": { "transient": { "deleting": false } } }, "outputs": [], "source": [ "%%synapse\n", "\n", "# Plot Hinton diagrams\n", "\n", "# Number of clusters that the model has learned\n", "n_clusters_learned = np.count_nonzero(np.exp(R.u))\n", "print(f'{n_clusters_learned} non-empty clusters learned.\\n')\n", "\n", "bpplt.hinton(R)\n", "print(f'\"Size\" of each of the {n_clusters} clusters', '(The areas of the squares are proportional to the cluster assignment probabilities)', sep='\\n')\n", "bpplt.pyplot.show()\n", "print('\\n')\n", "# Use this to retrieve the exact cluster assignment probabilities:\n", "# np.exp(R.u)\n", "\n", "print(\n", " f'Probability that a feature takes an anomalous value per cluster',\n", " f'(Columns represent clusters and rows represent features - e.g., a large white square in the 2nd column, 4th row would indicate that failed logins in cluster 2 are likely to have an unusual value for \"{feat_cols[3]}\" (feature 4))',\n", " sep='\\n'\n", ")\n", "print('Features (rows):', feat_cols)\n", "bpplt.hinton(P)\n", "bpplt.pyplot.show()\n", "# Use this to retrieve the exact probabilities:\n", "# np.exp(P.u)" ] }, { "cell_type": "markdown", "metadata": { "nteract": { "transient": { "deleting": false } } }, "source": [ "## Prune Clusters\n", "\n", "We first use our learned model to assign each failed sign-in to a cluster along with the associated probability of the sign-in belonging to the cluster." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "outputs_hidden": false, "source_hidden": false }, "nteract": { "transient": { "deleting": false } } }, "outputs": [], "source": [ "%%synapse\n", "\n", "# Append columns to the dataframe of failed logins giving the assigned cluster number and associated confidence (probability)\n", "\n", "# Extract the cluster assignment probability mass functions for each failed sign-in\n", "# `assignment_pmfs` is an array of shape (n_failed_signins, n_clusters)\n", "assignment_pmfs = Z.u[0][:, 0, :]\n", "\n", "# The assigned cluster for each failed sign-in is the one with the greatest probability\n", "assigned_clusters = assignment_pmfs.argmax(axis=1).tolist() # clusters are indexed from 0 to (n_clusters - 1)\n", "assignment_probs = assignment_pmfs.max(axis=1).tolist()\n", "\n", "# Create a dataframe of (Signin ID, Assigned Cluster, Cluster Assignment Probability) rows\n", "assignments_df = spark.createDataFrame(zip(ids, assigned_clusters, assignment_probs), ('Id', 'cluster_id', 'assignment_probability'))\n", "\n", "# Join the cluster assignment dataframe back to the rest of the sign-in data\n", "fail_df = fail_df.join(assignments_df, on='Id', how='inner')\n", "fail_df.drop(*ord_enc_cols).show(5)" ] }, { "cell_type": "markdown", "metadata": { "nteract": { "transient": { "deleting": false } } }, "source": [ "Now we do some pruning of the learned clusters to remove those which are unlikely to represent the type of password spray activity we are looking for.\n", "\n", "First we set a confidence threshold to prune failed logins included in each cluster **(intra-cluster pruning)**. 
We then prune clusters by:\n", "\n", "- Setting a minimum size for clusters of interest\n", "- Setting a minimum threshold on the number of features consistently taking anomalous values within a cluster\n", "\n", "> **Note:** The thresholds to use will depend very much on the data on which the algorithm is being run; start low, and increase the thresholds if results are too noisy." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "outputs_hidden": false, "source_hidden": false }, "nteract": { "transient": { "deleting": false } } }, "outputs": [], "source": [ "%%synapse\n", "\n", "# Determine how many features consistently take anomalous values within each cluster\n", "p = np.exp(P.u)[0, ..., 0] # shape = (n_feats, n_clusters)\n", "consistently_anomalous_threshold = 0.4\n", "n_anomalous_features_per_cluster = (p > consistently_anomalous_threshold).sum(axis=0)\n", "\n", "# Prune clusters which do not have enough anomalous features to look like the type of password spray activity for which we are searching\n", "cluster_min_anomalous_feats = 2\n", "clusters_to_keep = (n_anomalous_features_per_cluster >= cluster_min_anomalous_feats).nonzero()[0].tolist()\n", "\n", "# Prune points within clusters and then remove clusters that are too small\n", "cluster_min_confidence_threshold = 0.4\n", "min_avg_attempts_per_day = 0\n", "cluster_min_size = min_avg_attempts_per_day * lookback_days\n", "\n", "clusters_df = (\n", " fail_df\n", " .filter(col('cluster_id').isin(clusters_to_keep))\n", " .filter(col('assignment_probability') >= cluster_min_confidence_threshold)\n", " .groupby('cluster_id')\n", " .agg(F.count(col('Id')).alias('cluster_size'))\n", " .filter(col('cluster_size') >= cluster_min_size)\n", ")\n", "\n", "# We now have our candidate low and slow clusters! We can filter the failed sign-in logs\n", "candidate_pw_spray_clusters = [row[0] for row in clusters_df.select(col('cluster_id')).collect()]\n", "low_and_slow_candidates = fail_df.filter(col('cluster_id').isin(candidate_pw_spray_clusters))\n", "print('Total number of candidate failed sign-in attempts:', low_and_slow_candidates.count())\n", "\n", "clusters_df.show()" ] }, { "cell_type": "markdown", "metadata": { "nteract": { "transient": { "deleting": false } } }, "source": [ "We now have our candidate low and slow password spray campaigns! These campaigns/clusters will be further pruned when we use `msticpy` for specific analysis and TI enrichment of these campaigns - for example, by checking how uniformly each cluster's sign-ins are spread over time, as sketched below." ] }
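, { "cell_type": "markdown", "metadata": { "nteract": { "transient": { "deleting": false } } }, "source": [ "One such pruning signal - that sign-ins in a genuine low and slow campaign arrive at a roughly steady rate over the whole window - can be checked with a Kolmogorov-Smirnov test against a uniform distribution of sign-in times. The sketch below is illustrative only (it is not necessarily the exact statistic applied later in the notebook) and assumes a pandas Series of timestamps for a single cluster; `ks_1samp` is already imported from `scipy.stats` in the AML session.\n", "```python\n", "from scipy.stats import ks_1samp, uniform\n", "\n", "def looks_steady(timestamps: pd.Series, p_threshold: float = 0.05) -> bool:\n", "    \"\"\"Return True if the sign-in times are consistent with a uniform rate over the window.\"\"\"\n", "    ts = pd.to_datetime(timestamps).astype('int64')  # nanoseconds since epoch\n", "    if ts.max() == ts.min():\n", "        return False  # all sign-ins at the same instant - not a slow campaign\n", "    scaled = (ts - ts.min()) / (ts.max() - ts.min())  # rescale onto [0, 1]\n", "    result = ks_1samp(scaled, uniform.cdf)  # compare against Uniform(0, 1)\n", "    return result.pvalue > p_threshold  # large p-value => uniformity not rejected\n", "```\n" ] }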
**Sample of successful sign-ins** - this will be used in MSTICPy vizualizations and as part of reporting back to Sentinel\n", "\n", "Each of the above outputs will be saved as a single json file in ADLS." ] }, { "cell_type": "markdown", "metadata": { "nteract": { "transient": { "deleting": false } } }, "source": [ "### Export Candidate Password Sprays" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "outputs_hidden": false, "source_hidden": false }, "nteract": { "transient": { "deleting": false } } }, "outputs": [], "source": [ "%%synapse\n", "\n", "from pathlib import PurePosixPath\n", "\n", "base_dir_name = 'low_and_slow_pw_spray_ml' # optionally add a suffix if you want to avoid overwriting results from a previous run\n", "base_path = PurePosixPath(adls_path, base_dir_name)\n", "\n", "# Candidate low and slow password sprays\n", "low_and_slow_candidates_path = base_path/'low_and_slow_candidates'\n", "ohe_df.coalesce(1).write.format('json').save(low_and_slow_candidates_path)" ] }, { "cell_type": "markdown", "metadata": { "nteract": { "transient": { "deleting": false } } }, "source": [ "### Export Aggregated Data/Statistics" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "gather": { "logged": 1658225807419 }, "jupyter": { "outputs_hidden": false, "source_hidden": false }, "nteract": { "transient": { "deleting": false } } }, "outputs": [], "source": [ "# Aggregated sign-in timestamps\n", "timestamp_df = (\n", " success_df\n", " .select(F.date_trunc('hour', 'TimeGenerated').alias('TimeGenerated'))\n", " .groupBy('TimeGenerated')\n", " .count()\n", " .orderBy('TimeGenerated')\n", ")\n", "signin_day_df = (\n", " success_df\n", " .select(F.dayofweek('TimeGenerated').alias('day_of_week'))\n", " .groupBy('hour')\n", " .count()\n", " .orderBy('hour')\n", ")\n", "signin_hour_df = (\n", " success_df\n", " .select(F.hour('TimeGenerated').alias('hour'))\n", " .groupBy('hour')\n", " .count()\n", " .orderBy('hour')\n", ")\n", "\n", "# Most common successful sign-in locations (for geoplot)\n", "locations_df = (\n", " success_df\n", " .groupBy(['latitude', 'longitude'])\n", " .count()\n", " .na.drop()\n", " .orderBy('count', ascending=False)\n", " .head(5000)\n", ")\n", "\n", "# Entropy (not normalized) per feature (this is a measure of \"variability\")\n", "feat_entropies = dict(zip(feat_cols, entropy(category_counts, axis=1)))\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "outputs_hidden": false, "source_hidden": false }, "nteract": { "transient": { "deleting": false } } }, "outputs": [], "source": [ "# Persist aggregated data\n", "\n", "from pathlib import PurePosixPath\n", "\n", "dataframe_output_dirs = [\n", " # (dataframe, output directory name)\n", " (timestamp_df, 'signin_times'),\n", " (signin_day_df, 'signin_days'),\n", " (signin_hour_df, 'signin_hours'),\n", " (locations_df, 'top_locations'),\n", "]\n", "\n", "for data, dir in dataframe_output_dirs:\n", " path = base_path/dir\n", " data.coalesce(1).write.format('json').save(path)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "outputs_hidden": false, "source_hidden": false }, "nteract": { "transient": { "deleting": false } } }, "outputs": [], "source": [ "import json\n", "with open(str(base_path/'entropy_per_feature.json')) as f:\n", " json.dump(feat_entropies)" ] }, { "cell_type": "markdown", "metadata": { "nteract": { "transient": { "deleting": false } } }, "source": [ "### Export Baseline Sample" ] }, { "cell_type": 
"code", "execution_count": null, "metadata": { "jupyter": { "outputs_hidden": false, "source_hidden": false }, "nteract": { "transient": { "deleting": false } } }, "outputs": [], "source": [ "%%synapse\n", "\n", "# Retain a sample of successful logins to use as a baseline for\n", "baseline_sample_size = 10000\n", "sample_fraction = len(df) // baseline_sample_size\n", "df.sample(fraction=sample_fraction)\n", "\n", "baseline_sample_path = base_path/'successful_login_baseline_sample'\n", "low_and_slow_candidates.coalesce(1).write.format('json').save(baseline_sample_path)" ] }, { "cell_type": "markdown", "metadata": { "nteract": { "transient": { "deleting": false } } }, "source": [ "## Stop Spark Session" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "outputs_hidden": false, "source_hidden": false }, "nteract": { "transient": { "deleting": false } } }, "outputs": [], "source": [ "%synapse stop" ] }, { "cell_type": "markdown", "metadata": { "nteract": { "transient": { "deleting": false } } }, "source": [ "# 3. Analyze Clusters on AML Compute" ] }, { "cell_type": "markdown", "metadata": { "nteract": { "transient": { "deleting": false } } }, "source": [ "## Export results from ADLS to local filesystem" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "gather": { "logged": 1652529214937 }, "jupyter": { "outputs_hidden": false, "source_hidden": false }, "nteract": { "transient": { "deleting": false } } }, "outputs": [], "source": [ "def initialize_storage_account(storage_account_name, storage_account_key):\n", " try:\n", " global service_client\n", " service_client = DataLakeServiceClient(\n", " account_url='{}://{}.dfs.core.windows.net'.format(\n", " 'https', storage_account_name\n", " ),\n", " credential=storage_account_key,\n", " )\n", " except Exception as e:\n", " print(e)\n", "\n", "\n", "def list_directory_contents(container_name, input_path, file_type):\n", " try:\n", " file_system_client = service_client.get_file_system_client(\n", " file_system=container_name\n", " )\n", " paths = file_system_client.get_paths(path=input_path)\n", "\n", " pathlist = []\n", " for path in paths:\n", " pathlist.append(path.name) if path.name.endswith(file_type) else pathlist\n", " return pathlist\n", "\n", " except Exception as e:\n", " print(e)\n", "\n", "\n", "def download_file_from_directory(container_name, input_path, input_file):\n", " try:\n", " file_system_client = service_client.get_file_system_client(\n", " file_system=container_name\n", " )\n", " directory_client = file_system_client.get_directory_client(input_path)\n", " local_file = open('output.json', 'wb')\n", " file_client = directory_client.get_file_client(input_file)\n", " download = file_client.download_file()\n", " downloaded_bytes = download.readall()\n", " local_file.write(downloaded_bytes)\n", " local_file.close()\n", "\n", " except Exception as e:\n", " print(e)\n", "\n", "\n", "def json_normalize(input_file, output_file):\n", " resultList = []\n", " with open(input_file) as f:\n", " for jsonObj in f:\n", " resultDict = json.loads(jsonObj)\n", " resultList.append(resultDict)\n", "\n", " with open(output_file, 'w') as write_file:\n", " json.dump(resultList, write_file)" ] }, { "cell_type": "markdown", "metadata": { "nteract": { "transient": { "deleting": false } } }, "source": [ "### Load the Data from ADLS\n", "\n", "In below sections, we will provide input details about ADLS account ad then use functions to connect , list contents and download results locally.\n", "\n", "If you need 
help locating these input details, follow the steps below:\n", "- Go to https://web.azuresynapse.net and sign in to your workspace.\n", "- In Synapse Studio, click Data, select the Linked tab, and select the container under Azure Data Lake Storage Gen2.\n", "- Navigate to the folder in the container, right-click and select Properties.\n", "- Copy the ABFSS path, extract the details and map them to the input fields below.\n", "\n", "\n", "See the [View account access keys](https://docs.microsoft.com/azure/storage/common/storage-account-keys-manage?tabs=azure-portal#view-account-access-keys) doc to find and retrieve the storage account keys for your ADLS account.\n", "\n", "<p style=\"border: solid; padding: 5pt; color: black; background-color: #AA4000\">\n", "<b>Warning</b>: Avoid storing secrets such as storage account keys directly in the notebook;<br>\n", "store them either in the msticpyconfig file on the compute instance or in<br>\n", "Azure Key Vault.<br>\n", "Read more about using Key Vault\n", "<a href=https://msticpy.readthedocs.io/en/latest/getting_started/msticpyconfig.html#specifying-secrets-as-key-vault-secrets >in the MSTICPy docs</a>\n", "</p>" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "gather": { "logged": 1652529532348 }, "jupyter": { "outputs_hidden": false, "source_hidden": false }, "nteract": { "transient": { "deleting": false } } }, "outputs": [], "source": [ "# Primary storage info\n", "account_name = '<storage account name>' # fill in your primary account name\n", "container_name = '<container name>' # fill in your container name\n", "subscription_id = '<subscription id>' # fill in your subscription id\n", "resource_group = '<resource-group>' # fill in the resource group of your Log Analytics workspace (NOT the resource group of the ADLS account)\n", "workspace_name = '<Microsoft Sentinel/Log Analytics workspace name>' # fill in your log analytics workspace name\n", "\n", "input_path = (\n", "    f'WorkspaceResourceId=/'\n", "    f'subscriptions/{subscription_id}/'\n", "    f'resourcegroups/{resource_group.lower()}/'\n", "    f'providers/microsoft.operationalinsights/'\n", "    f'workspaces/{workspace_name.lower()}/'\n", ")\n", "adls_path = f\"abfss://{container_name}@{account_name}.dfs.core.windows.net/\"\n", "dir_name = 'low_and_slow_pw_spray_ml'\n", "# In production, make sure any keys are stored and retrieved securely (e.g. using Azure Key Vault) - don't store keys in plain text!\n", "account_key = '<storage-account-key>' # replace with your storage account key" ] }
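, { "cell_type": "markdown", "metadata": { "nteract": { "transient": { "deleting": false } } }, "source": [ "As an alternative to pasting the key into the notebook, the sketch below shows one way to pull the storage account key from Azure Key Vault at runtime. This is an illustrative example only: the vault URL and secret name (`my-keyvault`, `adls-account-key`) are placeholders, and it assumes the identity running this notebook has been granted read access to secrets in that vault." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "outputs_hidden": false, "source_hidden": false }, "nteract": { "transient": { "deleting": false } } }, "outputs": [], "source": [ "# Optional: retrieve the storage account key from Azure Key Vault instead of hard-coding it.\n", "# The vault URL and secret name below are placeholders - replace them with your own values.\n", "from azure.identity import DefaultAzureCredential\n", "from azure.keyvault.secrets import SecretClient\n", "\n", "key_vault_url = 'https://my-keyvault.vault.azure.net'  # placeholder vault URL\n", "secret_client = SecretClient(vault_url=key_vault_url, credential=DefaultAzureCredential())\n", "account_key = secret_client.get_secret('adls-account-key').value  # placeholder secret name" ] }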
, { "cell_type": "code", "execution_count": null, "metadata": { "gather": { "logged": 1652529851467 }, "jupyter": { "outputs_hidden": false, "source_hidden": false }, "nteract": { "transient": { "deleting": false } } }, "outputs": [], "source": [ "# If your results were written under the workspace-scoped path, list input_path + dir_name instead of dir_name below\n", "import json\n", "from pathlib import PurePosixPath\n", "\n", "initialize_storage_account(account_name, account_key)\n", "pathlist = list_directory_contents(container_name, dir_name, \"json\")\n", "\n", "for path in pathlist:\n", "    path = PurePosixPath(path)\n", "    download_file_from_directory(container_name, path.parent, path.name)\n", "\n", "# Spark writes JSON Lines, so read everything back with lines=True\n", "baseline_times = pd.read_json('signin_times/output.json', lines=True)\n", "baseline_days = pd.read_json('signin_days/output.json', lines=True)\n", "baseline_hours = pd.read_json('signin_hours/output.json', lines=True)\n", "top_locations_df = pd.read_json('top_locations/output.json', lines=True)\n", "baseline_entropies = pd.read_json('entropy_per_feature/output.json', lines=True)\n", "clusters_df = pd.read_json('low_and_slow_candidates/output.json', lines=True)\n", "success_sample_df = pd.read_json('successful_login_baseline_sample/output.json', lines=True)\n", "\n", "# Parse timestamps and split the candidate sign-ins into one DataFrame per cluster\n", "for time_df in (baseline_times, clusters_df, success_sample_df):\n", "    time_df['TimeGenerated'] = pd.to_datetime(time_df['TimeGenerated'])\n", "\n", "cluster_details = {cluster_id: data for cluster_id, data in clusters_df.groupby('cluster_id')}\n", "n_clusters = len(cluster_details)" ] }, { "cell_type": "markdown", "metadata": { "nteract": { "transient": { "deleting": false } } }, "source": [ "## Analyze Clusters Using MSTICPy\n", "\n", "Having used big data analytics and ML to reduce our SigninLogs data to a handful of candidate low and slow password spray clusters, we are now ready to investigate each of the generated clusters.\n", "\n", "The two broad questions to try to answer at this stage are:\n", "- Do the clusters represent likely (low and slow) password spray activity?\n", "- Do the clusters exhibit any distinctive properties that will aid with remediation and/or attribution? (E.g. Do the sign-ins all use an unusual user agent that could be blocked?)<br>\n", " This information can be added to incidents written back to Sentinel.\n", "\n", "In the following section, we use [MSTICPy](https://msticpy.readthedocs.io/en/latest/index.html)'s built-in security analytics tools to better understand each cluster. We only present a few general techniques here - your investigation may lead you down a different route." ] }, { "cell_type": "markdown", "metadata": { "nteract": { "transient": { "deleting": false } } }, "source": [ "### Visualize Clusters\n", "\n", "The candidate low and slow password spray clusters have been generated based on the mix of features which are typically anomalous. We can plot charts for each candidate cluster showing, for each sign-in property/feature:\n", "\n", "1. The number of sign-ins where that property/feature is anomalous\n", "2. The \"variability\" of that property/feature\n", "\n", "Together, these two properties \"fingerprint\" each cluster and can inform the direction of further hunting. For example, suppose a cluster is characterised by its sign-ins having anomalous \"ClientAppUsed\" and \"Location\" properties, \n", "and suppose that the \"variability\" for these properties is low within the cluster. This indicates that a relatively small number of anomalous client apps / sign-in locations are being used, which means that there is potential to write a rule-based detection on these static anomalous values." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "outputs_hidden": false, "source_hidden": false }, "nteract": { "transient": { "deleting": false } } }, "outputs": [], "source": [ "from scipy.stats import entropy\n", "\n", "is_anom_cols = [col for col in clusters_df.columns if col.startswith('IsAnom_')]\n", "clusters_grouped = clusters_df.groupby('cluster_id')\n", "\n", "# Number of sign-ins with an anomalous value for each feature, per cluster\n", "num_anom_per_cluster = clusters_grouped[is_anom_cols].sum().reset_index()\n", "num_anom_per_cluster['label'] = '# Anomalous'\n", "\n", "# Per-cluster entropy of each feature's value counts, normalized by the baseline entropy of that feature\n", "entropy_per_cluster = (\n", "    clusters_grouped.apply(lambda g: g.drop(columns='cluster_id').apply(lambda feat: entropy(feat.value_counts())))\n", "    / baseline_entropies.iloc[0]\n", ").reset_index()\n", "entropy_per_cluster['label'] = 'Entropy (normalized)'\n", "\n", "bar_charts = []\n", "for i in range(n_clusters):\n", "    bar_chart_data = pd.concat([\n", "        num_anom_per_cluster[num_anom_per_cluster.cluster_id == i],\n", "        entropy_per_cluster[entropy_per_cluster.cluster_id == i],\n", "    ])\n", "    bar_charts.append(bar_chart_data)\n", "\n", "# Plot the i-th chart\n", "i = 0\n", "bar_charts[i].hvplot.bar(stacked=False, height=500)" ] }
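, { "cell_type": "markdown", "metadata": { "nteract": { "transient": { "deleting": false } } }, "source": [ "To support writing a rule-based detection on any invariant anomalous values, the sketch below lists, for each candidate cluster, the most common values of the features that are flagged as anomalous. This is an illustrative example only: it assumes that each `IsAnom_<feature>` indicator column in `clusters_df` has a corresponding raw `<feature>` column, which may not be the case for your data." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "outputs_hidden": false, "source_hidden": false }, "nteract": { "transient": { "deleting": false } } }, "outputs": [], "source": [ "# For each cluster, show the most common values of the features flagged as anomalous.\n", "# Assumes each 'IsAnom_<feature>' column has a matching raw '<feature>' column in clusters_df.\n", "top_n = 3\n", "for i in range(n_clusters):\n", "    cluster_data = clusters_df[clusters_df.cluster_id == i]\n", "    print(f'Cluster {i}:')\n", "    for is_anom_col in is_anom_cols:\n", "        feature = is_anom_col.replace('IsAnom_', '')\n", "        if feature not in cluster_data.columns:\n", "            continue\n", "        anomalous_rows = cluster_data[cluster_data[is_anom_col] == 1]\n", "        if anomalous_rows.empty:\n", "            continue\n", "        top_values = anomalous_rows[feature].value_counts().head(top_n)\n", "        print(f'    {feature}: {top_values.to_dict()}')" ] }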
] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "outputs_hidden": false, "source_hidden": false }, "nteract": { "transient": { "deleting": false } } }, "outputs": [], "source": [ "is_anom_cols = [col for col in clusters_df.columns if col.startswith('IsAnom_')]\n", "clusters_grouped = clusters_df.groupby('cluster_id')\n", "num_anom_per_cluster = clusters_grouped[is_anom_cols].sum()\n", "num_anom_per_cluster['label'] = '# Anomalous'\n", "entropy_per_cluster = clusters_grouped.apply(lambda g: g.apply(entropy, axis=0)) / baseline_entropies\n", "entropy_per_cluster['label'] = 'Entropy (normalized)'\n", "\n", "bar_charts = []\n", "for i in range(n_clusters):\n", " bar_chart_data = pd.concat(\n", " num_anom_per_cluster[num_anom_per_cluster.cluster_id == i],\n", " entropy_per_cluster[entropy_per_cluster.cluster_id == i],\n", " )\n", " bar_charts.append(bar_chart_data)\n", "\n", "# Plot the i-th chart\n", "i = 0\n", "bar_charts[0]. hvplot.bar(stacked=False, height=500)" ] }, { "cell_type": "markdown", "metadata": { "nteract": { "transient": { "deleting": false } } }, "source": [ "### Times Series Analysis\n", "\n", "A good indicator of low and slow password spray-like activity is regular patterns in the times of the candidate sign-ins. Although threat actors add some random noise to the schedule on which password spray sign in attempts occur, when viewed as a whole, there is often still a distinctive uniformity to the time series of sign in attempts as attacker endeavour to avoid lock-out.\n", "\n", "In order to test sign-ins in each of our candiadate clusters for \"uniform spread\" over time, we perform a [Kolmogorov-Smirnov goodness-of-fit test](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.ks_1samp.html) against a uniform distribution. The output value will be between 0 and 1, with values closer to zero indicating that sign-in times are unlikely to generated from a uniform distribution." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "outputs_hidden": false, "source_hidden": false }, "nteract": { "transient": { "deleting": false } } }, "outputs": [], "source": [ "# Output is between 0 and 1; roughly speaking, larger means more uniform.\n", "\n", "def uniformity_metric(times: pd.Series):\n", " # Depending on your data, you might want to remove outliers first!\n", " max_time = times.max().to_numpy()\n", " min_time = times.min().to_numpy()\n", " cdf = lambda x: (x - min_time) / (max_time - min_time)\n", " _, p = ks_1samp(times, cdf)\n", " return p\n", "\n", "# Compute uniformity checks\n", "for i in range(n_clusters):\n", " times = clusters_df[clusters_df.cluster_id == i].TimeGenerated.dropna()\n", " p = uniformity_metric(times)\n", " print(f\"Cluster {i} uniformity: {p}\")" ] }, { "cell_type": "markdown", "metadata": { "nteract": { "transient": { "deleting": false } } }, "source": [ "Similarly, normal sign-in activity will exhibit distinctive day/week seasonality which we can check for in our candidate low and slow password spray clusters." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "outputs_hidden": false, "source_hidden": false }, "nteract": { "transient": { "deleting": false } } }, "outputs": [], "source": [ "# Output is between 0 and 1; roughly speaking, larger means that sign-ins from the candidate cluster follow a similar pattern to those from our baseline of successful sign-ins.\n", "\n", "for i in range(n_clusters):\n", " times = clusters_df[clusters_df.cluster_id == i].TimeGenerated.dropna()\n", "\n", " # Test day-of-week seasonality\n", " days = times.dt.dayofweek\n", " _, days_distribution_similarity = ks_2samp(baseline_days, days)\n", " print(f\"Likelihood that sign-in days from cluster {i} have a distribution matching the baseline: {days_distribution_similarity}\")\n", "\n", " # Test hour-of-day seasonality\n", " hours = times.dt.hour\n", " _, hours_distribution_similarity = ks_2samp(baseline_hours, hours)\n", " print(f\"Likelihood that sign-in hours from cluster {i} have a distribution matching the baseline: {hours_distribution_similarity}\")" ] }, { "cell_type": "markdown", "metadata": { "nteract": { "transient": { "deleting": false } } }, "source": [ "#### Timeseries Plots\n", "\n", "The above statistics does not capture the many different patterns we might see in attacker behaviour (especially as attackers use increasingly sophisticated techniques to avoid detection). \n", "A time-plot vizualization can highlight patterns not captured by analytics.\n", "\n", "Common things to look out for:\n", "\n", "- Sign-in attempts spread out fairly uniformly over time. \n", "- Lack of day/week/month seasonal patterns\n", "- Sign-ins on particularly unusual days (e.g. public holidays)\n", "\n", "You can also modify the plot below to show just the sign-in day-of-week or hour-of-day." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "outputs_hidden": false, "source_hidden": false }, "nteract": { "transient": { "deleting": false } } }, "outputs": [], "source": [ "from msticpy.vis.timeline import display_timeline_values\n", "\n", "min_time_generated = (success_sample_df.TimeGenerated.min()).tz_localize(None)\n", "max_time_generated = (success_sample_df.TimeGenerated.max()).tz_localize(None)\n", "\n", "baseline_times[\"cluster_id\"] = \"BASELINE\"\n", "\n", "time_plot_data = pd.concat([clusters_df[[\"TimeGenerated\", \"cluster_id\"]], baseline_times], axis=0)\n", "time_plot_data = time_plot_data.groupby([pd.Grouper(key=\"TimeGenerated\", freq=\"2T\"), \"cluster_id\"], as_index=False).size()\n", "display_timeline_values(\n", " data=time_plot_data,\n", " y=\"size\",\n", " group_by=\"cluster_id\",\n", ")" ] }, { "cell_type": "markdown", "metadata": { "nteract": { "transient": { "deleting": false } } }, "source": [ "### Sign-In Location Analysis\n", "\n", "We can use msticpy's visualisation libraries to plot locations on a map. This can be particularily useful when looking at the distribution of anomalous sign in attempts.\n", "\n", "<div style=\"border: solid; padding: 5pt\"><b>Note:</b>\n", " If your logs source does not include GeoIP data, you can use msticpy's geolocation capabilities using the maxmind database. 
, { "cell_type": "markdown", "metadata": { "nteract": { "transient": { "deleting": false } } }, "source": [ "### Sign-In Location Analysis\n", "\n", "We can use msticpy's visualisation libraries to plot locations on a map. This can be particularly useful when looking at the distribution of anomalous sign-in attempts.\n", "\n", "<div style=\"border: solid; padding: 5pt\"><b>Note:</b>\n", " If your log source does not include GeoIP data, you can use msticpy's geolocation capabilities using the maxmind database. You will need a maxmind API key to download the database.\n", " <br>\n", " You may see the GeoLite driver downloading its database the first time you run this.\n", "</div>\n", "<details>\n", " <summary>Learn more about MSTICPy GeoIP providers...</summary>\n", " <p>\n", " <a href=https://msticpy.readthedocs.io/en/latest/data_acquisition/GeoIPLookups.html >MSTICPy GeoIP Providers</a>\n", " </p>\n", "</details>\n", "<br>\n", "\n", "We use two plots to answer two questions in this section:\n", "\n", "1. Are sign-in attempts generally from unusual locations as compared to the baseline successful sign-ins?\n", "2. Can we learn anything more specific about where sign-ins for each cluster are coming from?" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "outputs_hidden": false, "source_hidden": false }, "nteract": { "transient": { "deleting": false } } }, "outputs": [], "source": [ "# Plot the locations of sign-ins per cluster against the baseline sample of successful sign-ins\n", "\n", "common_args = dict(x=\"longitude\", y=\"latitude\", height=500, width=900)\n", "display(\n", "    clusters_df.hvplot.scatter(\n", "        **common_args,\n", "        title=\"Sign-in Locations by Cluster/Baseline\",\n", "        color=\"orange\",\n", "        by=\"cluster_id\",\n", "        alpha=0.3\n", "    )\n", "    * top_locations_df.hvplot.scatter(**common_args, color=\"green\", alpha=0.3, size=10)\n", ")\n", "md(\"Successful sign-in locations in green.\", \"bold\")\n", "md(\"Note: Fainter dots indicate fewer logons, brighter color indicates multiple logons.\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "outputs_hidden": false, "source_hidden": false }, "nteract": { "transient": { "deleting": false } } }, "outputs": [], "source": [ "# Using MSTICPy's Folium Map integration for interactive geo-plotting capabilities, we can dig a bit further into the sign-in locations per cluster\n", "\n", "clusters_df.mp_plot.folium_map(\n", "    lat_column=\"latitude\",\n", "    long_column=\"longitude\",\n", "    layer_column='cluster_id',\n", "    zoom_start=1,\n", ")" ] }
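, { "cell_type": "markdown", "metadata": { "nteract": { "transient": { "deleting": false } } }, "source": [ "If your sign-in data is missing the `latitude`/`longitude` columns used above, the sketch below shows one way to populate location data with MSTICPy's GeoLite (MaxMind) provider. It is illustrative only: it assumes a MaxMind key is configured in `msticpyconfig.yaml` and that the source IP address is held in an `IPAddress` column - adjust the column name to match your data." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "outputs_hidden": false, "source_hidden": false }, "nteract": { "transient": { "deleting": false } } }, "outputs": [], "source": [ "# Optional: enrich sign-ins with GeoIP data using MSTICPy's GeoLite (MaxMind) provider.\n", "# Assumes the source IP is in an 'IPAddress' column and a MaxMind key is set in msticpyconfig.yaml.\n", "from msticpy.context.geoip import GeoLiteLookup\n", "\n", "iplocation = GeoLiteLookup()\n", "\n", "# Returns the input rows joined with GeoIP fields (including latitude/longitude) for each IP\n", "geo_df = iplocation.df_lookup_ip(clusters_df, column='IPAddress')\n", "geo_df.head()" ] }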
\n", "<br>You will need to register with IBM Xforce and enter API keys into `mstipyconfig.yaml`\n", "\n", "<details>\n", " <summary>Learn more...</summary>\n", " <p>\n", " </p>\n", " <ul>\n", " <li>More details are shown in the <i>A Tour of Cybersec notebook features</i> notebook</li>\n", " <li><a href=https://msticpy.readthedocs.io/en/latest/data_acquisition/TIProviders.html >Threat Intel Lookups in MSTICPy</a></li>\n", " <li> To learn more about adding TI sources, see the TI Provider setup in the <i>A Getting Started Guide For Microsoft Sentinel ML Notebooks</i> notebook\n", " </ul>\n", "</details>\n", "<br>" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "outputs_hidden": false, "source_hidden": false }, "nteract": { "transient": { "deleting": false } } }, "outputs": [], "source": [ "from msticpy import TILookup\n", "\n", "ti_lookup = TILookup()\n", "# Perform lookup on a single IOC\n", "result = ti_lookup.lookup_ioc(observable=\"52.183.120.194\", providers=[\"XForce\"])\n", "ti_lookup.result_to_df(result)" ] }, { "cell_type": "markdown", "metadata": { "nteract": { "transient": { "deleting": false } } }, "source": [ "### Whois registration enrichment\n", "In this step, we can perform whois lokup on all public destination ips and populate additional information such as ASN. You can use this output to further filter known ASNs from the results." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "outputs_hidden": false, "source_hidden": false }, "nteract": { "transient": { "deleting": false } } }, "outputs": [], "source": [ "from msticpy.context.ip_utils import get_whois_info\n", "\n", "num_ips = len(df[\"DestinationIP\"].unique())\n", "print(f\"Performing WhoIs lookups for {num_ips} IPs \", end=\"\")\n", "df[\"DestASN\"] = df.apply(lambda x: get_whois_info(x.DestinationIP, True), axis=1)\n", "df[\"DestASNFull\"] = df.apply(lambda x: x.DestASN[1], axis=1)\n", "df[\"DestASN\"] = df.apply(lambda x: x.DestASN[0], axis=1)\n", "\n", "#Display results\n", "df.head()" ] }, { "cell_type": "markdown", "metadata": { "nteract": { "transient": { "deleting": false } } }, "source": [ "### Other\n", "\n", "There is a lot more data available in the SigninLogs table that we haven't looked at. Using the MSTICPy `DataViewer` control below, you can interactively inspect your raw data to see if anything stands out.\n", "\n", "**Every security investigation is different, and will depend heavily on your data and environment.** There are many more tools (including those in MSTICPy) that you may wish to use to further your investigation. Take a look at our [guided hunting blog post]() and the [MSTICPy notebook examples](https://msticpy.readthedocs.io/en/latest/notebooksamples.html)." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "outputs_hidden": false, "source_hidden": false }, "nteract": { "transient": { "deleting": false } } }, "outputs": [], "source": [ "from msticpy.vis.data_viewer import DataViewer\n", "DataViewer(clusters_df)" ] }, { "cell_type": "markdown", "metadata": { "nteract": { "transient": { "deleting": false } } }, "source": [ "# 4. Create Sentinel Incidents\n", "\n", "To support security analysts to respond to these candidate password spray events, we create custom incidents in the Sentinel workspace.\n", "\n", "MSTICPy has built-in support for reading from, and writing to, Microsoft Sentinel. 
, { "cell_type": "markdown", "metadata": { "nteract": { "transient": { "deleting": false } } }, "source": [ "### Whois registration enrichment\n", "In this step, we can perform a WhoIs lookup on all public source IP addresses and populate additional information such as the ASN. You can use this output to further filter known ASNs from the results." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "outputs_hidden": false, "source_hidden": false }, "nteract": { "transient": { "deleting": false } } }, "outputs": [], "source": [ "from msticpy.context.ip_utils import get_whois_info\n", "\n", "num_ips = len(clusters_df['IPAddress'].unique())\n", "print(f'Performing WhoIs lookups for {num_ips} IPs ', end='')\n", "clusters_df['ASN'] = clusters_df.apply(lambda x: get_whois_info(x.IPAddress, True), axis=1)\n", "clusters_df['ASNFull'] = clusters_df.apply(lambda x: x.ASN[1], axis=1)\n", "clusters_df['ASN'] = clusters_df.apply(lambda x: x.ASN[0], axis=1)\n", "\n", "# Display results\n", "clusters_df.head()" ] }, { "cell_type": "markdown", "metadata": { "nteract": { "transient": { "deleting": false } } }, "source": [ "### Other\n", "\n", "There is a lot more data available in the SigninLogs table that we haven't looked at. Using the MSTICPy `DataViewer` control below, you can interactively inspect your raw data to see if anything stands out.\n", "\n", "**Every security investigation is different, and will depend heavily on your data and environment.** There are many more tools (including those in MSTICPy) that you may wish to use to further your investigation. Take a look at our [guided hunting blog post]() and the [MSTICPy notebook examples](https://msticpy.readthedocs.io/en/latest/notebooksamples.html)." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "outputs_hidden": false, "source_hidden": false }, "nteract": { "transient": { "deleting": false } } }, "outputs": [], "source": [ "from msticpy.vis.data_viewer import DataViewer\n", "DataViewer(clusters_df)" ] }, { "cell_type": "markdown", "metadata": { "nteract": { "transient": { "deleting": false } } }, "source": [ "# 4. Create Sentinel Incidents\n", "\n", "To help security analysts respond to these candidate password spray events, we create custom incidents in the Sentinel workspace.\n", "\n", "MSTICPy has built-in support for reading from, and writing to, Microsoft Sentinel. Using the provided API, we first create an incident for each candidate campaign, indicating potential low and slow password spray activity. \n", "We then add comments to the incidents giving details of each candidate campaign, including details of the affected entities. This makes it easy for security analysts to make use of the outputs of this ML notebook and take further action as appropriate.\n", "\n", "You may wish to modify the structure of the incidents written back to Sentinel based on your team's workflow." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "gather": { "logged": 1649061232845 }, "jupyter": { "outputs_hidden": false, "source_hidden": false }, "nteract": { "transient": { "deleting": false } } }, "outputs": [], "source": [ "from msticpy.context.azure.sentinel_core import MicrosoftSentinel\n", "\n", "# Use the Sentinel workspace details provided in the configuration cell above\n", "sentinel = MicrosoftSentinel(\n", "    sub_id=subscription_id,\n", "    res_grp=resource_group,\n", "    ws_name=workspace_name,\n", ")\n", "sentinel.connect()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "gather": { "logged": 1648824183408 }, "jupyter": { "outputs_hidden": false, "source_hidden": false }, "nteract": { "transient": { "deleting": false } } }, "outputs": [], "source": [ "for cluster_df in cluster_details.values():\n", "    sentinel.create_incident(\n", "        title=\"Potential Low and Slow Password Spray Activity\",\n", "        severity=\"Low\",\n", "        first_activity_time=cluster_df.TimeGenerated.min(),\n", "        last_activity_time=cluster_df.TimeGenerated.max(),\n", "        description=cluster_df.to_string()\n", "    )" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "gather": { "logged": 1649062444515 }, "jupyter": { "outputs_hidden": false, "source_hidden": false }, "nteract": { "transient": { "deleting": false } } }, "outputs": [], "source": [ "incidents = sentinel.list_incidents()\n", "created_incident_id = incidents[\n", "    incidents[\"properties.title\"] == \"Potential Low and Slow Password Spray Activity\"\n", "].sort_values(by=\"properties.incidentNumber\", ascending=False).name.iloc[0]\n", "\n", "html_data = cluster_df.loc[cluster_df.index != \"Id\"].to_html(header=False)\n", "sentinel.post_comment(\n", "    incident_id=created_incident_id,\n", "    comment=html_data,\n", ")" ] }, { "cell_type": "markdown", "metadata": { "nteract": { "transient": { "deleting": false } } }, "source": [ "# Conclusion\n", "\n", "Due to the nature of low and slow password sprays, we needed to start our hunting on very large datasets of historical sign-in logs. The sheer scale of the data made Spark a great tool for performing distributed data operations at scale. \n", "We then used anomaly detection and clustering to surface groups of failed sign-in attempts that share the same set of anomalous properties, based on known patterns used by attackers.\n", "To analyze this data further, we used msticpy's data enrichment and visualization capabilities.\n", "\n", "Analysts can perform further investigation, create incidents in Microsoft Sentinel, and track those investigations in Sentinel. \n", "Details of possible next steps to take are in the accompanying Microsoft Tech Community blog post: [Microsoft Sentinel Blog - Microsoft Tech Community](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/bg-p/MicrosoftSentinelBlog). 
\n", "For more information on hunting and incident response playbooks for password sprays, please see [Password spray investigation | Microsoft Docs](https://docs.microsoft.com/security/compass/incident-response-playbook-password-spray)." ] } ], "metadata": { "kernel_info": { "name": "python38-azureml" }, "kernelspec": { "display_name": "Python 3.8 - AzureML", "language": "python", "name": "python38-azureml" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.5" }, "microsoft": { "language": "python" }, "nteract": { "version": "nteract-front-end@1.0.0" } }, "nbformat": 4, "nbformat_minor": 2 }